feat: wire per-database S3 bucket name and publicUrlPrefix into presigned URL plugin #968
Merged: pyramation merged 1 commit into main on Apr 9, 2026
feat: wire per-database S3 bucket name and publicUrlPrefix into presigned URL plugin
The storage_module table already has endpoint, public_url_prefix, and
provider columns per-database, but the presigned URL plugin runtime
never used them — all S3 operations went through the single global
S3Config (from env vars).
This commit:
1. Adds BucketNameResolver type + resolveBucketName option to
PresignedUrlPluginOptions — lets each database resolve to its own
S3 bucket name while sharing a single S3 client (credentials).
2. Adds resolveS3ForDatabase() helper in both plugin.ts and
download-url-field.ts that overlays per-database bucket name
(from resolveBucketName) and publicUrlPrefix (from storageConfig)
onto the global S3Config.
3. Updates requestUploadUrl to generate presigned PUT URLs against
the per-database S3 bucket.
4. Updates confirmUpload to verify uploads (HeadObject) against the
per-database S3 bucket.
5. Updates downloadUrl field to:
- Always resolve storageConfig before building URLs
- Use per-database publicUrlPrefix for public file CDN URLs
- Use per-database bucket for presigned GET URLs
6. Adds createBucketNameResolver() in presigned-url-resolver.ts
that derives bucket names as {BUCKET_NAME}-{databaseId}.
7. Wires resolveBucketName into the ConstructivePreset.
S3 credentials (AWS_ACCESS_KEY/AWS_SECRET_KEY) remain global —
only the bucket name and publicUrlPrefix are per-database.
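
The overlay described in items 1–2 can be sketched in TypeScript. Note this is a sketch only: the field names on `S3Config`/`StorageConfig` and the exact signature of `resolveS3ForDatabase` are assumptions for illustration, not the project's actual types.

```typescript
// Hypothetical shapes; the real S3Config/StorageConfig fields may differ.
interface S3Config {
  bucketName: string;
  publicUrlPrefix: string;
}

interface StorageConfig {
  publicUrlPrefix?: string;
}

// One resolver shared by all requests: databaseId -> S3 bucket name.
type BucketNameResolver = (databaseId: string) => string;

// Overlay the per-database bucket name and publicUrlPrefix onto the
// global config; credentials (and therefore the S3 client) stay shared.
function resolveS3ForDatabase(
  base: S3Config,
  databaseId: string,
  resolveBucketName?: BucketNameResolver,
  storageConfig?: StorageConfig
): S3Config {
  return {
    ...base,
    bucketName: resolveBucketName
      ? resolveBucketName(databaseId)
      : base.bucketName,
    publicUrlPrefix: storageConfig?.publicUrlPrefix ?? base.publicUrlPrefix,
  };
}

const baseConfig: S3Config = {
  bucketName: "app",
  publicUrlPrefix: "https://cdn.example.com",
};
const perDb = resolveS3ForDatabase(baseConfig, "db1", (id) => `app-${id}`, {
  publicUrlPrefix: "https://db1.example.com",
});
console.log(perDb.bucketName, perDb.publicUrlPrefix);
```

Because only plain string fields are overlaid, no new S3 client is constructed per request; the shared client simply targets a different bucket.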
devin-ai-integration Bot pushed a commit that referenced this pull request on Apr 10, 2026:
Adds a new integration test that exercises the full upload pipeline for both public and private files using real MinIO:
- requestUploadUrl → PUT to presigned URL → confirmUpload
- Tests public bucket (is_public=true) and private bucket (is_public=false)
- Tests content-hash deduplication
- Uses lazy S3 bucket provisioning (from PR #969)
- Uses per-database bucket naming (from PR #968)

Includes seed fixtures (simple-seed-storage) that create:
- jwt_private schema with current_database_id() function
- metaschema tables (database, schema, table, field)
- services tables (apis, domains, api_schemas)
- storage_module config row
- storage tables (buckets, files, upload_requests)
- Two buckets: public and private
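
The per-database bucket naming this test relies on could be sketched as the factory below. This is an illustration, not the code in `presigned-url-resolver.ts`; in particular, the `localDev` guard is an assumption of one way to address the local-MinIO concern raised in the review checklist.

```typescript
// Sketch only: derives per-database bucket names as {BUCKET_NAME}-{databaseId}.
// The localDev option is hypothetical, not part of the actual implementation.
type BucketNameResolver = (databaseId: string) => string;

function createBucketNameResolver(
  bucketPrefix: string,
  opts: { localDev?: boolean } = {}
): BucketNameResolver {
  // In local dev, return the prefix as-is so a single pre-created
  // MinIO bucket can serve every database.
  if (opts.localDev) return () => bucketPrefix;
  return (databaseId) => `${bucketPrefix}-${databaseId}`;
}

const resolveBucketName = createBucketNameResolver("myapp");
console.log(resolveBucketName("db1")); // "myapp-db1"
```

The factory closes over the prefix once at preset construction time, so the per-request work is a single string concatenation.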
Summary
The `storage_module` table already has `endpoint`, `public_url_prefix`, and `provider` columns per-database, but the presigned URL plugin runtime ignored them — all S3 operations used the single global `S3Config` from env vars (`BUCKET_NAME`, `CDN_ENDPOINT`, `CDN_PUBLIC_URL_PREFIX`).

This PR wires up per-database bucket isolation:
- `BucketNameResolver` type + `resolveBucketName` option on `PresignedUrlPluginOptions`: a function `(databaseId) => s3BucketName` that the plugin calls on every request
- `resolveS3ForDatabase()` helper: overlays the per-database bucket name (from `resolveBucketName`) and `publicUrlPrefix` (from `storageConfig`) onto the global `S3Config`. S3 credentials/client remain shared.
- `requestUploadUrl` now generates presigned PUT URLs against the per-database bucket
- `confirmUpload` now verifies uploads (HeadObject) against the per-database bucket
- `downloadUrl` field restructured: resolves `storageConfig` before building URLs, so public files use the per-database `publicUrlPrefix` and private files use the per-database bucket for presigned GET URLs
- `createBucketNameResolver()` in `presigned-url-resolver.ts` derives bucket names as `{BUCKET_NAME}-{databaseId}`
- `resolveBucketName` wired into the `ConstructivePreset`

Review & Testing Checklist for Human
- `createBucketNameResolver` always appends `-{databaseId}`: the JSDoc claims it "returns the prefix as-is when it looks like a local dev bucket", but the implementation does `${prefix}-${databaseId}` unconditionally. This will break local MinIO dev (bucket `test-bucket-{uuid}` won't exist). Either the code or the comment needs fixing: likely a guard for local dev, or the MinIO setup needs to auto-create per-database buckets.
- `resolveS3ForDatabase` is duplicated in both `plugin.ts` and `download-url-field.ts`. Verify both copies stay in sync, or extract to a shared module.
- Previously, the global `publicUrlPrefix` was returned immediately without any DB query; now ALL files trigger a `withPgClient` call to resolve per-database config. Verify this is acceptable for your traffic patterns.
- An existing naming scheme uses `{prefix}-{bucketKey}` (e.g., `myapp-public`), but this resolver uses `{prefix}-{databaseId}`. Verify these are intentionally different naming schemes, or reconcile them.
- End to end: upload via `requestUploadUrl`, confirm via `confirmUpload`, then fetch `downloadUrl`, and verify the presigned URLs and public URLs target the correct per-database bucket. Test with at least two databases to confirm isolation.

Notes
- `endpoint` is NOT yet per-database: the S3 client (and thus the endpoint it connects to) remains global. The `storageConfig.endpoint` column exists in the DB but is not used to create per-database S3 clients. This would require an S3 client pool, deferred to future work.
- … `storage_module`.

Link to Devin session: https://app.devin.ai/sessions/e47513cf8b974ae6985c42c0a657e4d7
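
The deferred "S3 client pool" mentioned in the notes could take the shape below: one cached client per endpoint, so `storage_module.endpoint` could later vary by database without reconstructing clients on every request. This is a sketch of the idea only; `makeClient` stands in for an SDK client constructor.

```typescript
// Cache one client per endpoint. makeClient is a hypothetical stand-in
// for constructing an S3 SDK client bound to that endpoint.
function createClientPool<T>(makeClient: (endpoint: string) => T) {
  const pool = new Map<string, T>();
  return (endpoint: string): T => {
    if (!pool.has(endpoint)) {
      pool.set(endpoint, makeClient(endpoint));
    }
    return pool.get(endpoint)!;
  };
}

// Usage: repeated lookups for the same endpoint reuse the cached client.
const getClient = createClientPool((endpoint) => ({ endpoint }));
getClient("https://minio.local:9000");
```

Keyed caching like this bounds the number of clients to the number of distinct endpoints rather than the number of databases.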
Requested by: @pyramation